
    Full Wafer Redistribution and Wafer Embedding as Key Technologies for a Multi-Scale Neuromorphic Hardware Cluster

    Together with the Kirchhoff-Institute for Physics (KIP), the Fraunhofer IZM has developed a full-wafer redistribution and embedding technology as the basis for a large-scale neuromorphic hardware system. The paper gives an overview of the neuromorphic computing platform at the KIP and the associated hardware requirements that drove the described technological developments. In the first phase of the project, standard redistribution technologies from wafer-level packaging were adapted to enable high-density reticle-to-reticle routing on 200 mm CMOS wafers. Neighboring reticles were interconnected across the scribe lines with an 8 µm pitch routing based on semi-additive copper metallization. Passivation by photosensitive benzocyclobutene was used to enable a second intra-reticle routing layer. Final IO pads with flash gold were generated on top of each reticle. With this concept, neuromorphic systems based on full wafers could be assembled and tested. The fabricated high-density inter-reticle routing showed a very high yield of more than 99.9%. To allow upscaling of the system to a large number of wafers with feasible effort, a full-wafer embedding concept for printed circuit boards was developed and proven in the second phase of the project. The wafers were thinned to 250 µm and laminated with additional prepreg layers and copper foils into a core material. After lamination of the PCB panel, the reticle IOs of the embedded wafer were accessed by micro-via drilling, copper electroplating, lithography, and subtractive etching of the PCB wiring structure. The created wiring with 50 µm line width enabled access to the reticle IOs on the embedded wafer as well as board-level routing. The panels with the embedded wafers were subsequently stressed with up to 1000 thermal cycles between 0 °C and 100 °C and showed no severe failure formation over the cycle time. Comment: Accepted at EPTC 201
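    The reported interconnect yield can be put into perspective with a short back-of-the-envelope calculation. Assuming independent link failures and a hypothetical count of 1000 inter-reticle links per wafer (the paper does not state this number), the probability that a whole wafer's routing is defect-free is:

    ```python
    def interconnect_survival(per_link_yield, n_links):
        """Probability that every inter-reticle link on a wafer works,
        assuming independent, identically distributed link failures."""
        return per_link_yield ** n_links

    # Hypothetical numbers: 99.9% per-link yield, 1000 links per wafer.
    p_all_good = interconnect_survival(0.999, 1000)  # about 0.37
    ```

    This illustrates why per-link yields well above 99.9% matter at wafer scale: even small per-link defect rates compound over thousands of connections.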

    Accelerated physical emulation of Bayesian inference in spiking neural networks

    The massively parallel nature of biological information processing plays an important role in its superiority to human-engineered computing devices. In particular, it may hold the key to overcoming the von Neumann bottleneck that limits contemporary computer architectures. Physical-model neuromorphic devices seek to replicate not only this inherent parallelism, but also aspects of its microscopic dynamics in analog circuits emulating neurons and synapses. However, these machines require network models that are not only adept at solving particular tasks, but that can also cope with the inherent imperfections of analog substrates. We present a spiking network model that performs Bayesian inference through sampling on the BrainScaleS neuromorphic platform, where we use it for generative and discriminative computations on visual data. By illustrating its functionality on this platform, we implicitly demonstrate its robustness to various substrate-specific distortive effects, as well as its accelerated capability for computation. These results showcase the advantages of brain-inspired physical computation and provide important building blocks for large-scale neuromorphic applications. Comment: This preprint was published on 14 November 2019. Please cite as: Kungl A. F. et al. (2019) Accelerated Physical Emulation of Bayesian Inference in Spiking Neural Networks. Front. Neurosci. 13:1201. doi: 10.3389/fnins.2019.0120
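    The sampling principle behind such networks can be sketched abstractly: the target is a Boltzmann distribution over binary states, which the spiking dynamics approximate. A minimal software reference (plain Gibbs sampling, not the BrainScaleS implementation; the weights and biases are made up for the example) looks like this:

    ```python
    import math, random

    def gibbs_sample(weights, biases, n_sweeps, seed=0):
        """Estimate the marginals of a Boltzmann distribution
        p(z) ~ exp(z^T W z / 2 + b^T z) over binary states z by
        Gibbs sampling -- the abstract target that spiking-sampling
        networks approximate with neuron dynamics."""
        rng = random.Random(seed)
        n = len(biases)
        z = [0] * n
        counts = [0] * n
        for _ in range(n_sweeps):
            for i in range(n):
                # Local field of unit i given the current state of the others.
                u = biases[i] + sum(weights[i][j] * z[j] for j in range(n) if j != i)
                p_on = 1.0 / (1.0 + math.exp(-u))
                z[i] = 1 if rng.random() < p_on else 0
            for i in range(n):
                counts[i] += z[i]
        return [c / n_sweeps for c in counts]

    # Two units with excitatory coupling: marginals rise above the bias-only value.
    W = [[0.0, 1.5], [1.5, 0.0]]
    b = [-0.5, -0.5]
    marginals = gibbs_sample(W, b, 20000)
    ```

    On the hardware, the role of this per-unit update rule is played by the stochastic firing of analog neurons; the sketch only pins down the distribution being sampled from.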

    Neuromorphic Hardware In The Loop: Training a Deep Spiking Network on the BrainScaleS Wafer-Scale System

    Emulating spiking neural networks on analog neuromorphic hardware offers several advantages over simulating them on conventional computers, particularly in terms of speed and energy consumption. However, this usually comes at the cost of reduced control over the dynamics of the emulated networks. In this paper, we demonstrate how iterative training of a hardware-emulated network can compensate for anomalies induced by the analog substrate. We first convert a deep neural network trained in software to a spiking network on the BrainScaleS wafer-scale neuromorphic system, thereby enabling an acceleration factor of 10 000 compared to the biological time domain. This mapping is followed by in-the-loop training, where in each training step the network activity is first recorded in hardware and then used to compute the parameter updates in software via backpropagation. An essential finding is that the parameter updates do not have to be precise, but only need to approximately follow the correct gradient, which simplifies their computation. Using this approach, after only several tens of iterations, the spiking network reaches an accuracy close to that of the ideal software prototype. The presented techniques show that deep spiking networks emulated on analog neuromorphic devices can attain good computational performance despite the inherent variations of the analog substrate. Comment: 8 pages, 10 figures, submitted to IJCNN 201
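    The in-the-loop scheme can be sketched with a toy scalar model. Here the "hardware" forward pass is a stand-in software function with a fixed gain distortion and noise (both assumed for illustration), and the software-side update deliberately uses only an approximate gradient, mirroring the essential finding above:

    ```python
    import random

    def hardware_forward(w, x, rng):
        # Stand-in for the analog emulation: a distorted, noisy forward pass.
        gain = 0.9                      # fixed substrate mismatch (assumed)
        noise = rng.gauss(0.0, 0.01)    # trial-to-trial variability (assumed)
        return gain * w * x + noise

    def in_the_loop_train(data, w=0.0, lr=0.1, epochs=30, seed=1):
        """Each step: record activity on the 'hardware', then compute the
        parameter update in software from the recorded output."""
        rng = random.Random(seed)
        for _ in range(epochs):
            for x, target in data:
                y = hardware_forward(w, x, rng)   # recorded activity
                grad = (y - target) * x           # approximate gradient: ignores the
                w -= lr * grad                    # unknown hardware gain, but points
        return w                                  # in the correct direction

    data = [(1.0, 2.0), (2.0, 4.0), (-1.0, -2.0)]  # toy target mapping y = 2x
    w_final = in_the_loop_train(data)
    ```

    Although the update ignores the unknown gain, training converges to a weight near 2/0.9 that compensates for the distortion, so the distorted forward pass ends up matching the targets.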

    Pattern representation and recognition with accelerated analog neuromorphic systems

    Despite being originally inspired by the central nervous system, artificial neural networks have diverged from their biological archetypes as they have been remodeled to fit particular tasks. In this paper, we review several possibilities to reverse-map these architectures to biologically more realistic spiking networks with the aim of emulating them on fast, low-power neuromorphic hardware. Since many of these devices employ analog components, which cannot be perfectly controlled, finding ways to compensate for the resulting effects represents a key challenge. Here, we discuss three different strategies to address this problem: the addition of auxiliary network components for stabilizing activity, the utilization of inherently robust architectures, and a training method for hardware-emulated networks that functions without perfect knowledge of the system's dynamics and parameters. For all three scenarios, we corroborate our theoretical considerations with experimental results on accelerated analog neuromorphic platforms. Comment: accepted at ISCAS 201

    optimLanduse: A package for multiobjective land-cover composition optimization under uncertainty

    1. How to simultaneously combat biodiversity loss and maintain ecosystem functioning while improving human welfare remains an open question. Optimization approaches have proven helpful in revealing the trade-offs between the multiple functions and goals provided by land-cover configurations. The R package optimLanduse provides tools for easy and systematic application of the robust multiobjective land-cover composition optimization approach of Knoke et al. (2016). 2. The package includes tools to determine the land-cover composition that best balances the multiple functions a landscape can provide, as well as tools for understanding and visualizing the reasoning behind these compromises. A tutorial based on a published dataset guides users through the application and highlights possible use cases. 3. Illustrating the consequences of alternative ecosystem functions on the theoretically optimal landscape composition provides easily interpretable information for landscape modelling and decision-making. 4. The package opens the approach of Knoke et al. (2016) to the community of landscape modellers and planners and provides opportunities for straightforward systematic or batch applications. ISSN: 2041-210X
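    The flavor of the underlying robust optimization can be illustrated outside R with a minimal max-min sketch. The two land-cover types, the indicator names, and their normalized scores below are invented for the example, and the real package handles uncertainty scenarios that this sketch omits:

    ```python
    def robust_composition(indicators, step=0.01):
        """Grid-search sketch of a max-min (robust) land-cover optimization:
        choose the share x of land-cover type A (the rest is type B) that
        maximizes the worst-case performance across indicators.
        `indicators` maps name -> (score under pure A, score under pure B),
        with landscape-level scores assumed to mix linearly in x."""
        best_x, best_worst = 0.0, -1.0
        n = int(round(1 / step))
        for i in range(n + 1):
            x = i * step
            worst = min(x * a + (1 - x) * b for a, b in indicators.values())
            if worst > best_worst:
                best_x, best_worst = x, worst
        return best_x, best_worst

    # Hypothetical indicators: forest scores high on habitat, cropland on income.
    inds = {"habitat": (1.0, 0.2), "income": (0.3, 0.9)}
    share_A, guaranteed = robust_composition(inds)
    ```

    The max-min criterion selects the mixture whose worst-performing indicator is as good as possible; for these made-up scores that is an even split, guaranteeing a normalized score of 0.6.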

    Severe and frequent extreme weather events undermine economic adaptation gains of tree-species diversification

    Forests and their provision of ecosystem services are endangered by climate change. Tree-species diversification has been identified as a key adaptation strategy to balance economic risks and returns in forest stands. Yet, whether this synergy between ecology and economics persists under large-scale extreme weather events remains unanswered. Our model accounts for both small-scale disturbances in individual stands and extreme weather events that cause spatio-temporally correlated disturbances in a large number of neighboring stands. It economically optimizes stand-type allocations in a large forest enterprise with multiple planning units. Novel components are spatially explicit site heterogeneity and a comparison of economic diversification strategies under local and regionally coordinated planning, using simplified measures for α-, β-, and γ-diversity of stand types. α-diversity refers to the number and evenness of stand types in local planning units, β-diversity to the dissimilarity of the species composition across planning units, and γ-diversity to the number and evenness of stand types in the entire enterprise. Local planning led to stand-type diversification within planning units (α-diversity), while regionally coordinated planning led to diversification across planning units (β-diversity). With increasing extreme weather events, we observed a trend towards homogenization of the stand-type compositions selected under economic objectives. No diversification strategy fully buffered the adverse economic consequences. This led to fatalistic decisions, i.e., selecting stand types with low investment risks but also low resistance to disturbances. The resulting forest structures indicate potential adverse consequences for other ecosystem services. We conclude that high tree-species diversity may not necessarily buffer the economic consequences of extreme weather events. Forest policies reducing forest owners' investment risks are needed to establish stable forests that provide multiple ecosystem services.
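    The simplified diversity measures can be made concrete with a small richness-based sketch. The stand-type labels and the use of Whittaker's multiplicative β are illustrative assumptions; the study's actual measures additionally account for evenness and compositional dissimilarity:

    ```python
    def alpha_beta_gamma(units):
        """Richness-based diversity of stand types across planning units:
        alpha = mean number of stand types per planning unit,
        gamma = number of stand types in the entire enterprise,
        beta  = gamma / alpha (Whittaker's multiplicative beta)."""
        alpha = sum(len(set(u)) for u in units) / len(units)
        gamma = len(set().union(*[set(u) for u in units]))
        beta = gamma / alpha
        return alpha, beta, gamma

    # Two planning units with partly different stand types (hypothetical labels).
    units = [["spruce", "beech"], ["beech", "oak"]]
    a, b, g = alpha_beta_gamma(units)  # alpha = 2, beta = 1.5, gamma = 3
    ```

    Here β > 1 reflects that the two planning units differ in composition: the regional pool is richer than any single unit.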

    Evaluation of the Site Form as a Site Productive Indicator in Temperate Uneven-Aged Multispecies Forests in Durango, Mexico

    Even though the site index is a popular method for describing forest productivity, its use is limited in uneven-aged multispecies forests. Accordingly, the site form (SF) is an alternative to the site index based on the tree height–diameter relationship. Our study aims to evaluate the SF as a measure of productivity in the temperate uneven-aged multispecies forests of Durango, Mexico, applying three methods to estimate it: (i) as the mean height of dominant trees at a reference diameter (SFH-D); (ii) as the expected mean height of dominant trees at a reference mean diameter (SFMH-MD); and (iii) as the expected height at a reference diameter for a given site (SFh-dbh). We assess the effectiveness of the SF based on two hypotheses: (i) the SF correlates with total volume production, and (ii) the SF is independent of stand density. The SFH-D and the SFh-dbh showed a high correlation with productivity; however, they also correlated with density. In contrast, the SFMH-MD correlated only weakly with both density and productivity. We conclude that the SF is a suitable approach to describing site quality. Nonetheless, its effectiveness as a site quality indicator depends on the estimation method used.
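    The third estimation method (expected height at a reference diameter, SFh-dbh) can be sketched numerically. The allometric model form, the 25 cm reference diameter, and the plot measurements below are illustrative assumptions, not values from the study:

    ```python
    import math

    def fit_height_diameter(dbh, height):
        """Fit ln(H - 1.3) = a + b * ln(dbh) by ordinary least squares:
        a simple allometric height-diameter model (illustrative choice;
        1.3 m is breast height, subtracted as the model intercept)."""
        xs = [math.log(d) for d in dbh]
        ys = [math.log(h - 1.3) for h in height]
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
        a = my - b * mx
        return a, b

    def site_form(dbh, height, ref_dbh=25.0):
        """Site form as the model-predicted height at a reference diameter."""
        a, b = fit_height_diameter(dbh, height)
        return 1.3 + math.exp(a + b * math.log(ref_dbh))

    # Hypothetical plot measurements (dbh in cm, height in m).
    dbh = [10, 15, 20, 30, 40]
    height = [9.0, 12.5, 15.0, 19.0, 22.0]
    sf = site_form(dbh, height)  # expected height (m) at 25 cm dbh
    ```

    Two plots with the same fitted curve evaluated at the same reference diameter become directly comparable, which is what makes the SF usable where stand age, and hence the site index, is undefined.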